
Massively-Parallel and GPU-Accelerated DMD: An Implementation in the Python Library Heat
Dynamic Mode Decomposition (DMD) is a well-known and valuable tool in computational science and engineering. If the underlying data set is very large, applying DMD or related methods may be infeasible, in terms of memory and/or run-time requirements, on a “standard” workstation. Adapting computational workflows to run on compute clusters, however, can be challenging, in particular for non-experts in High-Performance Computing (HPC). To address this issue, we are developing a scalable and GPU-accelerated implementation of DMD within the project ESAPCA. Our implementation builds on and integrates into the Python library Heat, which is developed by DLR, Jülich Supercomputing Center, and Karlsruhe Institute of Technology and enables massively parallel array computing and machine learning on CPU/GPU clusters. A simple NumPy/scikit-learn-like API makes the library accessible, in particular to HPC non-experts. In this talk, we discuss the DMD features implemented so far, their design, and the corresponding challenges. We present numerical results concerning the scalability of our approach and give an outlook on ongoing and future applications.

Acknowledgement and Disclaimer: This research was supported by the European Space Agency through the Open Space Innovation Platform (https://ideas.esa.int) as an Early Technology Development Agreement and carried out under the Discovery Program ESA Early Technology Development (Research Agreement No. 4000144045/24/NL/GLC/ov). The view expressed in this publication can in no way be taken to reflect the official opinion of the European Space Agency.
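To make the method concrete, the following is a minimal serial NumPy sketch of exact DMD (SVD-based, as in Tu et al.), the kind of computation that a distributed library such as Heat would parallelize across CPUs/GPUs. The function name `exact_dmd` and its signature are illustrative assumptions, not Heat's actual API.

```python
import numpy as np

def exact_dmd(snapshots, rank):
    """Exact DMD of a snapshot matrix whose columns are successive time steps.

    Illustrative sketch only; a distributed implementation would replace the
    NumPy arrays and the SVD with their parallel counterparts.
    """
    # Time-shifted snapshot pairs: Y is X advanced by one time step.
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    # Thin SVD of X, truncated to the requested rank.
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Projected linear operator: A_tilde = U* Y V S^{-1}.
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    # Eigenvalues of A_tilde are the DMD eigenvalues.
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes: Phi = Y V S^{-1} W.
    modes = (Y @ Vh.conj().T / s) @ W
    return eigvals, modes
```

For full-rank data generated by a linear map x_{k+1} = A x_k, the returned eigenvalues coincide with those of A, and each mode is a corresponding eigenvector of A.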